fix(sync): bound entry key size in validate_entry to prevent replica exhaustion #94
emrul wants to merge 1 commit into
Conversation
Thanks, this looks sensible. Can you fix the formatting lints and rebase on main? Also, could you edit the PR description to add a # Breaking changes section noting the behavior change, and maybe collapse some of the other sections into fewer headers? No need for the test plan; we only merge PRs where all tests pass.
Thank you - yes I can do that this week (currently travelling, so bear with me!)
Sync-peer-supplied entries were accepted with arbitrarily large keys,
bounded only by the codec's MAX_MESSAGE_SIZE (1 GiB). `validate_entry`
did not check key length, so a peer that opened a sync session could
push entries with huge keys and force every replica that synced them
to persist them — a peer-controllable memory and disk amplifier.
Note: the key-size check has to live in `validate_entry` (or an
equivalent post-deserialization hook), not in `RecordIdentifier::new`,
because `RecordIdentifier` derives `Deserialize` and peer-supplied
entries reach the replica via serde, bypassing the constructor
entirely.
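The serde-bypass constraint can be sketched in plain Rust. All names below are simplified stand-ins for the crate's actual types, and the hand-written `decode_record` stands in for serde's derived deserializer, which likewise constructs the struct field by field:

```rust
const MAX_ENTRY_KEY_SIZE: usize = 4096;

pub struct RecordIdentifier {
    pub key: Vec<u8>,
}

impl RecordIdentifier {
    /// Constructor-time check: only covers identifiers built locally.
    pub fn new(key: Vec<u8>) -> Result<Self, String> {
        if key.len() > MAX_ENTRY_KEY_SIZE {
            return Err("key too large".to_string());
        }
        Ok(Self { key })
    }
}

/// Stand-in for a derived deserializer: it builds the struct directly,
/// never calling `new`, so any constructor-time bound is silently
/// skipped for peer-supplied data.
pub fn decode_record(wire: &[u8]) -> RecordIdentifier {
    RecordIdentifier { key: wire.to_vec() }
}

/// The bound therefore has to be re-checked on the decoded value.
pub fn validate_entry(id: &RecordIdentifier) -> Result<(), String> {
    if id.key.len() > MAX_ENTRY_KEY_SIZE {
        return Err(format!(
            "key too large: {} > {}",
            id.key.len(),
            MAX_ENTRY_KEY_SIZE
        ));
    }
    Ok(())
}

fn main() {
    // `new` refuses an oversized key, but the decode path accepts it...
    assert!(RecordIdentifier::new(vec![0u8; 1 << 20]).is_err());
    let from_peer = decode_record(&vec![0u8; 1 << 20]);
    // ...so only the post-deserialization check catches it.
    assert!(validate_entry(&from_peer).is_err());
}
```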
Fix: add `MAX_ENTRY_KEY_SIZE = 4096` and reject any entry whose key
exceeds it with a new `ValidationFailure::KeyTooLarge { actual, max }`.
The check runs before signature verification so oversized entries are
rejected without spending crypto work. 4 KiB is deliberately generous
for legitimate use cases (keys are short identifiers / paths); tunable
if upstream prefers a different limit.
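A minimal sketch of the check described above, under assumed simplified types (the real `SignedEntry` and `ValidationFailure` live in the crate itself; `verify_signature` here is only a placeholder for the ed25519 verification):

```rust
pub const MAX_ENTRY_KEY_SIZE: usize = 4096;

#[derive(Debug, PartialEq)]
pub enum ValidationFailure {
    KeyTooLarge { actual: usize, max: usize },
    BadSignature,
}

pub struct SignedEntry {
    pub key: Vec<u8>,
    pub signature: [u8; 64],
}

/// Placeholder for the real ed25519 signature verification.
fn verify_signature(_e: &SignedEntry) -> bool {
    true
}

pub fn validate_entry(entry: &SignedEntry) -> Result<(), ValidationFailure> {
    // Cheap length check first: oversized entries are rejected before
    // any crypto work is spent on them.
    if entry.key.len() > MAX_ENTRY_KEY_SIZE {
        return Err(ValidationFailure::KeyTooLarge {
            actual: entry.key.len(),
            max: MAX_ENTRY_KEY_SIZE,
        });
    }
    if !verify_signature(entry) {
        return Err(ValidationFailure::BadSignature);
    }
    Ok(())
}

fn main() {
    let oversized = SignedEntry { key: vec![0u8; 5000], signature: [0u8; 64] };
    assert_eq!(
        validate_entry(&oversized),
        Err(ValidationFailure::KeyTooLarge { actual: 5000, max: 4096 })
    );
}
```

Ordering the length check ahead of verification is what removes the "free crypto work" lever from the attacker.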
Reproduction: the added `test_peer_entry_with_oversized_key_rejected`
test builds a 1 MiB key into a `SignedEntry` and feeds it through
`insert_remote_entry`. Before the bound the test's predecessor
(`test_peer_entry_with_oversized_key_accepted`, run against upstream)
confirmed the entry was accepted. With the bound in place the test
asserts `KeyTooLarge` is returned, and that a key at exactly
`MAX_ENTRY_KEY_SIZE` is still accepted.
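The boundary behavior the test asserts can be sketched self-contained (stand-in types again; the real test drives the full peer path through `Replica::insert_remote_entry`):

```rust
const MAX_ENTRY_KEY_SIZE: usize = 4096;

#[derive(Debug, PartialEq)]
enum ValidationFailure {
    KeyTooLarge { actual: usize, max: usize },
}

fn validate_key(key: &[u8]) -> Result<(), ValidationFailure> {
    if key.len() > MAX_ENTRY_KEY_SIZE {
        return Err(ValidationFailure::KeyTooLarge {
            actual: key.len(),
            max: MAX_ENTRY_KEY_SIZE,
        });
    }
    Ok(())
}

fn main() {
    // A 1 MiB key is rejected with the structured error...
    assert!(matches!(
        validate_key(&vec![0u8; 1 << 20]),
        Err(ValidationFailure::KeyTooLarge { actual: 1048576, max: 4096 })
    ));
    // ...while a key of exactly MAX_ENTRY_KEY_SIZE is still accepted.
    assert!(validate_key(&vec![0u8; MAX_ENTRY_KEY_SIZE]).is_ok());
}
```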
@Frando - followed the PR template and updated per your feedback. Hope it's good now, and thank you!
Description

`validate_entry` does not check the length of an entry's key. The codec's `MAX_MESSAGE_SIZE` (1 GiB) is the only implicit upper bound, so a sync peer can submit signed entries with keys up to roughly 1 GiB. Every replica that syncs such an entry persists it — a peer-controllable memory / disk amplifier.

This PR adds `MAX_ENTRY_KEY_SIZE = 4096` and rejects entries whose key exceeds it with a new `ValidationFailure::KeyTooLarge { actual, max }`. The check runs before signature verification so oversized entries are dropped without spending ed25519 crypto work. 4 KiB is deliberately generous for legitimate uses (paths, UUIDs, short binary keys); the constant is trivial to tune.

The added `test_peer_entry_with_oversized_key_rejected` drives the peer path via `Replica::insert_remote_entry`: a 1 MiB key is rejected with `KeyTooLarge`, and a key of exactly `MAX_ENTRY_KEY_SIZE` is accepted. I confirmed the same test body returns `Ok` on upstream `main` before the fix, so the vector is reachable today.

Breaking Changes

Entries submitted via sync (or any path that goes through `validate_entry`) whose key exceeds 4 KiB are now rejected with `ValidationFailure::KeyTooLarge { actual, max }`. Previously they were accepted up to roughly 1 GiB. Users that legitimately use keys larger than 4 KiB will need to shorten them or recompile with an adjusted `MAX_ENTRY_KEY_SIZE`. `ValidationFailure` also gains a new `KeyTooLarge` variant, so code doing exhaustive matches will need to be updated.

Notes & open questions
Change checklist